
    A Study of deuteron electromagnetic form factors with light-front approach

    Full text link
    The electromagnetic form factors and low-energy observables of the deuteron are studied with the light-front approach, where the deuteron is regarded as a weakly bound state of a proton and a neutron. Both the S- and D-wave interaction vertices among the deuteron, proton, and neutron are taken into account, and regularization functions are also introduced. In our calculations, the vertex and regularization functions are employed to simulate the momentum distribution inside the deuteron. Our numerical results show that the light-front approach can roughly reproduce the deuteron electromagnetic form factors, namely the charge G_0, magnetic G_1, and quadrupole G_2, in the low Q^2 region. The important role of the D-wave vertex in G_2 is also addressed.
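    For context, a conventional normalization of these multipole form factors at Q^2 = 0 is in terms of the deuteron's charge, magnetic moment, and quadrupole moment; the block below states that standard convention and is not taken from the paper, whose own normalization may differ.

```latex
% Standard static normalizations of the deuteron multipole form factors
% (conventional; the paper's own normalization may differ).
\begin{align}
  G_0(0) &= 1, &
  G_1(0) &= \frac{M_d}{m_N}\,\mu_d, &
  G_2(0) &= M_d^{2}\,Q_d,
\end{align}
% where M_d and m_N are the deuteron and nucleon masses, \mu_d is the deuteron
% magnetic moment in nuclear magnetons, and Q_d is the quadrupole moment.
```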

    Polarized GPDs and structure functions of ρ meson

    Full text link
    The polarized generalized parton distributions of the ρ meson, its structure functions g_1 and g_2, and its axial form factors \tilde{G}_{1,2} are studied for the first time based on a light-front quark model. Comparing our moments of g_1 to lattice QCD calculations, we find that our results are reasonably consistent with the lattice predictions.

    Recent Progresses in Deep Learning based Acoustic Models (Updated)

    Full text link
    In this paper, we summarize recent progress made in deep learning based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss acoustic models that can effectively exploit variable-length contextual information, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and their various combinations with other models. We then describe acoustic models that are optimized end-to-end, with emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research. Comment: This is an updated version, with the latest literature up to ICASSP 2018, of the paper: Dong Yu and Jinyu Li, "Recent Progresses in Deep Learning based Acoustic Models," IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 3, 2017.
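    As a concrete illustration of the CTC criterion mentioned in this survey, the following is a minimal PyTorch sketch; the tensor shapes, vocabulary size, and random inputs are placeholders rather than anything from the paper.

```python
# Minimal illustration of the connectionist temporal classification (CTC)
# criterion with PyTorch; shapes and data are illustrative placeholders.
import torch
import torch.nn as nn

T, N, C = 50, 4, 30  # input frames, batch size, output labels (index 0 = blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12), dtype=torch.long)   # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)  # marginalizes over alignments
loss.backward()
```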

    Universal Quantum Filter

    Full text link
    A universal quantum filter (UQF) is introduced and proved to exist. An optical realization of the UQF is proposed for experiment. Comment: 5 pages

    On some Liouville Type Theorems for the Compressible Navier-Stokes Equations

    Full text link
    We prove several Liouville type results for stationary solutions of the d-dimensional compressible Navier-Stokes equations. In particular, we show that when the dimension d \geqslant 4, the natural requirements \rho \in L^{\infty}(\mathbb{R}^d) and v \in \dot{H}^1(\mathbb{R}^d) suffice to guarantee that the solution is trivial. For dimensions d = 2, 3, we assume the extra condition v \in L^{\frac{3d}{d-1}}(\mathbb{R}^d). This improves a recent result of Chae (2012). Comment: 16 pages
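    For reference, a standard form of the stationary isentropic compressible Navier-Stokes system to which such Liouville results apply is written below; the exact pressure law and viscosity assumptions used in the paper may differ.

```latex
% A standard stationary isentropic compressible Navier-Stokes system
% (illustrative; the paper's precise constitutive assumptions may differ).
\begin{align}
  \operatorname{div}(\rho v) &= 0, \\
  \operatorname{div}(\rho v \otimes v) + \nabla P(\rho)
    &= \mu \Delta v + (\mu + \lambda)\nabla \operatorname{div} v,
  \qquad x \in \mathbb{R}^d,
\end{align}
% with density \rho \ge 0, velocity v, pressure P(\rho) (e.g. P = a\rho^{\gamma}),
% and viscosity coefficients \mu, \lambda.
```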

    Rossby Wave Instability in Accretion Discs with Large-Scale Poloidal Magnetic Fields

    Full text link
    We study the effect of large-scale magnetic fields on the non-axisymmetric Rossby wave instability (RWI) in accretion discs. The instability develops around a density bump, which is likely present in the transition region between the active zone and the dead zone of protoplanetary discs. Previous works suggest that the vortices resulting from the RWI may facilitate planetesimal formation and angular momentum transport. We consider discs threaded by a large-scale poloidal magnetic field, with a radial field component at the disc surface. Such field configurations may lead to the production of magnetic winds or jets. In general, the magnetic field can affect the RWI even when it is sub-thermal (plasma β ∼ 10). For infinitely thin discs, the instability can be enhanced by about 10 percent. For discs with finite thickness and a radial gradient of the magnetic field strength, the RWI growth rate can increase significantly (by a factor of ∼2) as the field approaches equipartition (β ∼ 1). Our result suggests that the RWI can continue to operate in discs that produce magnetic winds. Comment: Accepted for publication in MNRAS, 7 pages, 8 figures
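    For orientation, the plasma β quoted above is the usual ratio of gas to magnetic pressure; the definition below is the standard one (Gaussian units) and is assumed rather than quoted from the paper.

```latex
% Standard definition of the plasma beta (Gaussian units).
\begin{equation}
  \beta \;\equiv\; \frac{P_{\mathrm{gas}}}{P_{\mathrm{mag}}}
        \;=\; \frac{8\pi P_{\mathrm{gas}}}{B^{2}},
\end{equation}
% so \beta \sim 10 corresponds to a sub-thermal field and \beta \sim 1 to
% rough equipartition between gas and magnetic pressure.
```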

    Inertial-Acoustic Oscillations of Black-Hole Accretion Discs with Large-Scale Poloidal Magnetic Fields

    Full text link
    We study the effect of large-scale magnetic fields on the non-axisymmetric inertial-acoustic modes (also called p-modes) trapped in the innermost regions of accretion discs around black holes (BHs). These global modes could provide an explanation for the high-frequency quasi-periodic oscillations (HFQPOs) observed in BH X-ray binaries. There may be observational evidence for the presence of such large-scale magnetic fields in the discs, since episodic jets are observed in the same spectral state in which HFQPOs are detected. We find that a large-scale poloidal magnetic field can enhance the corotational instability and increase the growth rate of the purely hydrodynamic overstable p-modes. In addition, we show that the frequencies of these overstable p-modes could be further reduced by such magnetic fields, making them agree better with observations. Comment: 7 pages, 5 figures. Revised according to the referee's report. Accepted for publication in MNRAS. arXiv admin note: substantial text overlap with arXiv:1212.121

    Prediction-Adaptation-Correction Recurrent Neural Networks for Low-Resource Language Speech Recognition

    Full text link
    In this paper, we investigate the use of prediction-adaptation-correction recurrent neural networks (PAC-RNNs) for low-resource speech recognition. A PAC-RNN comprises a pair of neural networks in which a correction network uses auxiliary information given by a prediction network to help estimate the state probability. The information from the correction network is also used by the prediction network in a recurrent loop. Our model outperforms other state-of-the-art neural networks (DNNs, LSTMs) on IARPA-Babel tasks. Moreover, transfer learning from a language that is similar to the target language can further improve performance.
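    A minimal sketch of the prediction-adaptation-correction idea described above is given here; the cell types, layer sizes, and auxiliary-target dimension are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a prediction-adaptation-correction RNN (PAC-RNN).
# Sizes and cell types are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class PACRNN(nn.Module):
    def __init__(self, feat_dim=40, hidden=256, n_states=1000, n_aux=100):
        super().__init__()
        # Prediction network: predicts auxiliary targets from the features plus
        # the previous state posterior (the recurrent loop).
        self.pred_rnn = nn.LSTMCell(feat_dim + n_states, hidden)
        self.pred_out = nn.Linear(hidden, n_aux)
        # Correction network: estimates state posteriors, helped by the
        # auxiliary information coming from the prediction network.
        self.corr_rnn = nn.LSTMCell(feat_dim + n_aux, hidden)
        self.corr_out = nn.Linear(hidden, n_states)

    def forward(self, feats):  # feats: (T, feat_dim)
        hp = cp = torch.zeros(1, self.pred_rnn.hidden_size)
        hc = cc = torch.zeros(1, self.corr_rnn.hidden_size)
        state_post = torch.zeros(1, self.corr_out.out_features)
        posteriors, aux_outputs = [], []
        for t in range(feats.size(0)):
            x = feats[t].unsqueeze(0)
            hp, cp = self.pred_rnn(torch.cat([x, state_post], dim=1), (hp, cp))
            aux = self.pred_out(hp)
            hc, cc = self.corr_rnn(torch.cat([x, aux], dim=1), (hc, cc))
            state_post = torch.softmax(self.corr_out(hc), dim=1)
            posteriors.append(state_post)
            aux_outputs.append(aux)
        return torch.stack(posteriors), torch.stack(aux_outputs)

feats = torch.randn(20, 40)                        # 20 frames of 40-dim features
state_posteriors, aux_predictions = PACRNN()(feats)
```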

    Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension

    Full text link
    Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations. We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best-performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems, as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text. C^3 is available at https://dataset.org/c3/. Comment: To appear in TACL

    Improving Machine Reading Comprehension with General Reading Strategies

    Full text link
    Reading strategies have been shown to improve comprehension levels, especially for readers lacking adequate prior knowledge. Just as the process of knowledge accumulation is time-consuming for human readers, it is resource-demanding to impart rich general domain knowledge into a deep language model via pre-training. Inspired by reading strategies identified in cognitive science, and given limited computational resources -- just a pre-trained model and a fixed number of training instances -- we propose three general strategies aimed at improving non-extractive machine reading comprehension (MRC): (i) BACK AND FORTH READING, which considers both the original and reverse order of an input sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text embedding of tokens that are relevant to the question and candidate answers, and (iii) SELF-ASSESSMENT, which generates practice questions and candidate answers directly from the text in an unsupervised manner. By fine-tuning a pre-trained language model (Radford et al., 2018) with our proposed strategies on the largest general-domain multiple-choice MRC dataset RACE, we obtain a 5.8% absolute increase in accuracy over the previous best result achieved by the same pre-trained model fine-tuned on RACE without the use of strategies. We further fine-tune the resulting model on a target MRC task, leading to an absolute improvement of 6.2% in average accuracy over previous state-of-the-art approaches on six representative non-extractive MRC datasets from different domains (i.e., ARC, OpenBookQA, MCTest, SemEval-2018 Task 11, ROCStories, and MultiRC). These results demonstrate the effectiveness of our proposed strategies and the versatility and general applicability of our fine-tuned models that incorporate these strategies. Core code is available at https://github.com/nlpdata/strategy/. Comment: To appear in NAACL-HLT 2019
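    A rough sketch of the HIGHLIGHTING strategy described above follows; the relevance test (surface overlap with the question and candidate answers), the class name, and the hidden size are placeholder assumptions rather than the authors' code.

```python
# Hypothetical sketch of the HIGHLIGHTING strategy: add a trainable embedding
# to the text embeddings of passage tokens that also occur in the question or
# candidate answers. The overlap heuristic here is a placeholder assumption.
import torch
import torch.nn as nn

class HighlightEmbedding(nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        # Two trainable vectors: index 0 = not relevant, index 1 = relevant.
        self.highlight = nn.Embedding(2, hidden_size)

    def forward(self, token_embeddings, passage_tokens, query_tokens):
        # token_embeddings: (seq_len, hidden_size); *_tokens: lists of strings.
        query_vocab = {t.lower() for t in query_tokens}
        flags = torch.tensor([1 if t.lower() in query_vocab else 0
                              for t in passage_tokens], dtype=torch.long)
        return token_embeddings + self.highlight(flags)

emb = HighlightEmbedding(hidden_size=8)
passage = ["the", "capital", "of", "France", "is", "Paris"]
query = ["What", "is", "the", "capital", "of", "France", "?"]
out = emb(torch.randn(len(passage), 8), passage, query)
```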